Click this link and use my code TECHWITHTIM to get 25% off your first payment for boot.dev.
If you're not running LLMs locally, you're missing out. ChatGPT and other hosted solutions are great, but if you care about speed, privacy, and cost, you'll want to learn how to run models on your own machine. In this video, I'll show you two ways to run LLMs locally, from a developer's perspective.
DevLaunch is my mentorship program where I personally help developers go beyond tutorials, build real-world projects, and actually land jobs. No fluff. Just real accountability, proven strategies, and hands-on guidance. Learn more here -
🎞 Video Resources 🎞
Download Ollama:
Ollama Library:
Ollama GitHub:
Docker Model Runner Full Video:
⏳ Timestamps ⏳
00:00 | Overview
00:38 | Method 1 - Ollama
04:29 | Ollama from Code
08:27 | Method 2 - Docker Model Runner
12:32 | Docker Model Runner from Code
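For the "Ollama from Code" section, here's a minimal sketch of calling a local Ollama server from Python over its REST API. It assumes Ollama is running on its default port (11434) and that you've pulled a model first (e.g. `ollama pull llama3` — swap in whichever model you downloaded); only the standard library is used, so there's nothing extra to install.

```python
import json
import urllib.request

def build_generate_payload(prompt, model="llama3"):
    # Ollama's /api/generate endpoint takes a JSON body with the model
    # name and prompt; stream=False asks for one complete response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama3", host="http://localhost:11434"):
    # POST the payload to the local Ollama server and return the
    # generated text from the "response" field of the reply.
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires the Ollama server to be running):
#   print(ask_ollama("Why run LLMs locally?"))
```

The same endpoint is what the official `ollama` Python package wraps, so this also shows what's happening under the hood if you use that library instead.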
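And for the "Docker Model Runner from Code" section, a similar sketch: Docker Model Runner exposes an OpenAI-compatible chat API, so you talk to it with standard chat-completion requests. The base URL, port (12434), and model name (`ai/smollm2`) below are assumptions for a default host-side TCP setup — check your own configuration and pulled models, as these vary by version and install.

```python
import json
import urllib.request

# Assumed default when host TCP access is enabled; adjust for your setup.
BASE_URL = "http://localhost:12434/engines/v1"

def build_chat_payload(prompt, model="ai/smollm2"):
    # OpenAI-style chat payload: a model name plus a list of messages.
    # "ai/smollm2" is just an example model identifier.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt, model="ai/smollm2"):
    # POST to the /chat/completions endpoint and pull the assistant's
    # reply out of the first choice, as with any OpenAI-compatible API.
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Usage (requires Docker Model Runner with TCP access enabled):
#   print(chat("Why run LLMs locally?"))
```

Because the API is OpenAI-compatible, you could also point the `openai` client library at the same base URL instead of hand-rolling requests.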
Hashtags
#Ollama #Docker #LLM
UAE Media License Number: 3635141